This report documents the internal quality control (QC) process of DNA methylation data generated at the University of Exeter Medical School for the following study:
Study: MRC Schizophrenia FANS samples
Arrays ran by: Complex Disease Epigenetic Group, University of Exeter Medical School
Array used: V2
Date of QC: 16 April 2025
Most recent BrainFANS version: v1.1.1 (specific commit hash used: 85c603404013d06f22afce656301f7ffea3adf69)
Sample tissue: brain
Project datapath: /lustre/projects/Research_Project-T127716/Morteza/pipeline_modified/IPSC
Data was loaded for 60 samples from 60 individuals.
Project variables are only included if they are categorical. For this study the following project variables were included:
A series of quality control (QC) metrics have been calculated for all samples and are reported below. After reviewing this report, exclusion thresholds to identify poorly performing samples can be provided to the normalisation script. For some QC metrics we use the provided project variables to aid identification of any patterns behind sample failures or artefacts that need to be included in the analysis.
Previous experience has shown that intensity level indicates sample quality and likelihood of passing the QC process. This is summarised for each sample by calculating the median of the methylated signal intensity and the median of the unmethylated signal intensity. In the histograms below we would ideally like to see a single distribution (typically approximately normal) with a single peak. The vertical red lines indicate a median intensity of 500. 0 samples with very low intensity values (< 500) are dropped at this stage and not considered in any of the subsequent QC steps.
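The intensity summary can be sketched as follows; the object names (M.intensities, U.intensities, intensThres) and the simulated data are illustrative stand-ins, not the pipeline's actual objects.

```r
## Sketch of the median intensity check, assuming matrices of methylated (M)
## and unmethylated (U) signal intensities with probes in rows and samples in
## columns. All names and data here are illustrative.
set.seed(42)
M.intensities <- matrix(rpois(1000 * 6, lambda = 3000), ncol = 6)
U.intensities <- matrix(rpois(1000 * 6, lambda = 2800), ncol = 6)
intensThres <- 500

M.median <- apply(M.intensities, 2, median)
U.median <- apply(U.intensities, 2, median)

## samples with a median below the threshold on either channel are dropped
lowIntens <- which(M.median < intensThres | U.median < intensThres)
length(lowIntens)
```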
In order to decide on a threshold to filter samples by, we will try fitting a bimodal distribution to the distribution of intensities. The idea here is that there will be one distribution for the failed samples and another for the passed samples.
## number of iterations= 2
## number of iterations= 58
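As a rough illustration of this idea, the following minimal two-component Gaussian mixture EM is written in base R on simulated medians (the pipeline itself uses the mixtools package, as the iteration messages above indicate); all names and values here are illustrative.

```r
## Minimal two-component Gaussian mixture EM on simulated median intensities.
## Illustrative only: the real fit uses mixtools::normalmixEM.
set.seed(1)
medians <- c(rnorm(10, mean = 400, sd = 50),    # hypothetical failed samples
             rnorm(50, mean = 3000, sd = 400))  # hypothetical passed samples

mu <- c(min(medians), max(medians)); sigma <- rep(sd(medians), 2); pi1 <- 0.5
for (iter in 1:100) {
  d1 <- pi1 * dnorm(medians, mu[1], sigma[1])
  d2 <- (1 - pi1) * dnorm(medians, mu[2], sigma[2])
  resp <- d1 / (d1 + d2)                       # E-step: P(failed | median)
  pi1 <- mean(resp)                            # M-step: update mixture weight
  mu <- c(weighted.mean(medians, resp), weighted.mean(medians, 1 - resp))
  sigma <- c(sqrt(weighted.mean((medians - mu[1])^2, resp)),
             sqrt(weighted.mean((medians - mu[2])^2, 1 - resp)))
}
## an exclusion threshold could then be placed between the two fitted means
round(mu)
```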
Differences in the ratio between M and U values will lead to differences in the distribution of beta values. In the histogram below we are looking for evidence of multiple distributions or a non-unimodal distribution. Hartigan's dip test for unimodality indicates that we cannot reject the null hypothesis of unimodality (P = 0.981).
In this dataset 0 fully methylated control samples were included to check that plates were orientated correctly. From the ratio of M intensities to U intensities 0 fully methylated control samples were detected; 0 (NaN%) fully methylated controls were identified in the correct position.
For each sample a bisulfite conversion statistic is calculated as the median value across 8 fully methylated control probes. We apply a threshold of 80%, excluding samples below it. In this dataset 0 (0 %) samples fail at this threshold.
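A minimal sketch of this statistic, assuming a small matrix of control probe values (in %) with the 8 control probes in rows and samples in columns; the names (ctrl.values, bsThres) and simulated data are illustrative, and the real values are typically derived with wateRmelon.

```r
## Sketch of the bisulfite conversion statistic: per-sample median across
## 8 fully methylated control probes. Names and data are illustrative.
set.seed(7)
ctrl.values <- matrix(runif(8 * 6, min = 85, max = 99), nrow = 8)
bsThres <- 80

bisulfCon <- apply(ctrl.values, 2, median)
sum(bisulfCon < bsThres)   # samples failing the 80% threshold
```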
For each sample the number of missing values was calculated. The vertical line indicates 2% of sites, 0 (0 %) samples have more sites with missing data than this threshold.
As recommended by Lehne et al. we performed principal component (PC) analysis of the control probes to identify batch effects and poorly performing samples. We identified 4 PCs which explained > 1% of the variance and focused on these for characterisation.
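The PC selection step can be sketched as below; ctrl.intens and the simulated data are illustrative stand-ins for the control probe intensity matrix.

```r
## Sketch of the control-probe PCA: run prcomp() on the control probe
## intensities (samples in rows) and keep PCs explaining > 1% of variance.
## Names and data are illustrative.
set.seed(3)
ctrl.intens <- matrix(rnorm(40 * 10), nrow = 40)   # 40 samples x 10 probes

pca <- prcomp(ctrl.intens, scale. = TRUE)
varExpl <- pca$sdev^2 / sum(pca$sdev^2)
nPC <- sum(varExpl > 0.01)   # number of PCs retained for characterisation
```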
Here we plot histograms of each PC to check for the presence of outliers. In the histograms below, the red dashed lines indicate 2 and 3 SD from the mean.
Here we will compare each of these PCs against other technical variables to see if they are picking up similar issues.
Here we plot scatterplots of each PC against PC1 to check for the presence of outliers.
Detection p-values provide a measure of the confidence that the DNAm value at a specific probe for a specific sample is detectable above background noise; filtering is performed at both the sample and probe level. For each sample we calculate the number of sites where the signal is not detectable above the background. Samples with a high percentage of such sites are excluded. In these data 0 samples are recommended for removal. For each probe we calculate the percentage of samples where the signal is not detectable above the background. Probes failing in a high percentage of samples are excluded. In these data 9546 probes are recommended for removal. Additionally, probes with a high percentage of samples with a beadcount less than 3 are excluded. In these data 71718 probes are recommended for removal.
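A minimal sketch of the sample- and probe-level detection p-value filtering on simulated p-values; the names (detP, pThres, sampleThres, probeThres) and thresholds here are illustrative, not the pipeline's configured values.

```r
## Sketch of detection p-value filtering, assuming detP is a matrix of
## detection p-values with probes in rows and samples in columns.
## All names, thresholds and data are illustrative.
set.seed(11)
detP <- matrix(rbeta(2000 * 6, 1, 100), nrow = 2000)  # mostly small p-values
pThres <- 0.05      # a site "fails" if its detection p-value exceeds this
sampleThres <- 5    # % of failed sites above which a sample is excluded
probeThres <- 1     # % of failed samples above which a probe is excluded

failedSamples <- which(colMeans(detP > pThres) * 100 > sampleThres)
failedProbes  <- which(rowMeans(detP > pThres) * 100 > probeThres)
## the beadcount filter works analogously on a matrix of bead counts,
## flagging probes with too many samples having a beadcount < 3
```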
To identify and visually inspect potential outliers we performed principal component analysis on the autosomal probes. We identified 5 PCs which explained > 1% of the variance and focused on these for characterisation.
We will use histograms and scatterplots below to visually inspect for potential outliers or patterns in the data. In the histograms below, the red dashed lines indicate 2 and 3 SD from the mean.
The goal of normalisation is to convert the data for each sample onto a common distribution and minimise the effects of technical variation. Outlier samples require a high degree of manipulation to make them resemble the rest of the samples. To identify samples that are dramatically altered as a result of normalisation, we quantified the difference between the normalised and raw data at each probe for each sample, summarised as the root mean square difference. Here we apply a threshold of 0.1 to exclude samples.
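The metric can be sketched as below; betas.raw, betas.norm and the simulated data are illustrative.

```r
## Sketch of the per-sample root mean square difference between raw and
## normalised betas (probes in rows, samples in columns). Names and data
## are illustrative.
set.seed(5)
betas.raw  <- matrix(runif(1000 * 6), nrow = 1000)
betas.norm <- betas.raw + rnorm(1000 * 6, sd = 0.01)  # mild adjustment
rmsdThres <- 0.1

rmsd <- sqrt(colMeans((betas.norm - betas.raw)^2))
sum(rmsd > rmsdThres)   # samples excluded at the 0.1 threshold
```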
This step uses the 59 SNP probes on the array to identify genetically identical samples. For all pairs of samples a correlation statistic across these probes is calculated. If there are correlations greater than 0.8 (highlighted by the red vertical line), this suggests the presence of genetically identical samples.
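A minimal sketch of the correlation check, simulating a 59-probe SNP beta matrix with one deliberately duplicated sample; all names and data are illustrative.

```r
## Sketch of the genetic identity check: correlate samples across the 59
## SNP (rs) probes and flag pairs with correlation > 0.8. Names and data
## are illustrative; sample 7 is a planted duplicate of sample 1.
set.seed(9)
snpBetas <- matrix(sample(c(0.05, 0.5, 0.95), 59 * 6, replace = TRUE),
                   nrow = 59)
snpBetas <- cbind(snpBetas, snpBetas[, 1] + rnorm(59, sd = 0.02))

snpCor <- cor(snpBetas)
diag(snpCor) <- NA                     # ignore self-correlations
which(snpCor > 0.8, arr.ind = TRUE)   # flags the duplicated pair (both orderings)
```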
We identified 1 genetically unique individual with the following distribution of number of samples per individual.
We can compare this to the expected distribution of the number of samples per individual based on the sample sheet.
These correlations can be visualised in the following heatmap, where the colours indicate samples from the same individual.
Checking if genetically identical samples have the same Individual ID:
## [1] "ERROR: Genetically identical samples had different individual IDs. See heatmap below for potential mismatches and output file WithinDNAmGeneticMismatches.csv for details"
Checking if samples with same Individual ID are genetically identical:
## [1] "All samples labelled as the same individual are genetically identical"
Using the intensity values from probes located on the X and Y chromosomes, we calculate a fold change relative to the intensity values from the autosomes. In females we would expect the fold change on the X chromosome to be greater than 1 and on the Y chromosome less than 1, while in males we would expect the fold change on the X chromosome to be less than 1 and on the Y chromosome greater than 1. Based on these assumptions we can predict male or female from each sex chromosome.
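These decision rules can be sketched as a small helper; predictSex and the hard cut-offs at exactly 1 are illustrative simplifications of the pipeline's logic, which also leaves an ambiguous zone with no prediction.

```r
## Sketch of sex prediction from X and Y chromosome fold changes relative
## to the autosomes. The function name and exact cut-offs are illustrative.
predictSex <- function(xFC, yFC) {
  if (xFC > 1 && yFC < 1) return("F")
  if (xFC < 1 && yFC > 1) return("M")
  NA_character_   # ambiguous: no prediction made
}

predictSex(xFC = 1.4, yFC = 0.3)   # "F"
predictSex(xFC = 0.7, yFC = 1.6)   # "M"
```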
Let’s look at the distribution of these fold change statistics. If you have > 1 sex you should see two peaks. You sometimes also see a third peak around 0; these are poor quality samples. The grey area on the graph highlights where the data are too ambiguous to make a sex prediction.
In this dataset 60 samples did not have sex predicted.
We will compare the sex predictions from the X and Y chromosomes.
Perform sex check against phenotype reported sexes: FALSE.
Perform concordance check against genotype data: FALSE.
Samples will now be filtered based on thresholds defined in the config file. The table below summarises the number of samples that fail each QC step.
Having excluded poor quality data and sample swaps, we will repeat the genetic clustering.
Post QC we identified 1 genetically unique individual with the following distribution of number of samples per individual.
Checking if genetically identical samples have the same Individual ID:
## [1] "ERROR: Genetically identical samples had different individual IDs."
Checking if samples with same Individual ID are genetically identical:
## [1] "All samples labelled as the same individual are genetically identical"
Perform cell type label check: TRUE.
After performing quality control (QC) across all samples (n = 60), 0 samples were excluded after stages 1 and 2 of the QC. The third stage of the QC pipeline will look to establish the success of the FACs sorting and check whether samples cluster by their labelled cell type.
| | Profiled | PassQCStage1 |
|---|---|---|
| iMicro | 54 | 54 |
| iPSC | 6 | 6 |
First, we will produce a heatmap and hierarchical clustering across all samples that passed stages 1 and 2 of the QC to gauge how cleanly the cell types are clustering. The following heatmap is based on the 500 most variable autosomal probes (ranked by SD); we will force the hierarchical clustering to be cut into the same number of groups as cell types.
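A minimal sketch of this clustering step on simulated betas; the probe and sample counts, and the two simulated cell types, are illustrative.

```r
## Sketch of hierarchical clustering on the most variable probes, cutting
## the tree into as many groups as cell types. Names and data illustrative:
## 4 samples of one simulated cell type and 2 of another.
set.seed(6)
betas <- cbind(matrix(rbeta(800 * 4, 2, 8), nrow = 800),   # cell type A
               matrix(rbeta(800 * 2, 8, 2), nrow = 800))   # cell type B

probeSD <- apply(betas, 1, sd)
topVar <- order(probeSD, decreasing = TRUE)[1:500]         # 500 most variable probes
clus <- cutree(hclust(dist(t(betas[topVar, ]))), k = 2)    # k = number of cell types
```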
We will use principal components to classify samples into cell types and determine if the profile of the sample is representative of the sample it is labelled as. The figure below shows the PCA before any checking of cell type.
We will also visualize which axes of variation separate the different cell types. Note that at this stage, we haven’t filtered out any incorrectly labelled samples so there may still be some noise.
In the following sections we will implement steps to confirm that a sample clusters with its group. As the samples generally cluster by cell type, we can use these data to make an internal prediction of which cell type each sample is most similar to. Given that the principal components parsimoniously capture the relevant axes of cell type variation, we will use them as the basis of this validation. To do this we need to “learn” the typical profile of each cell type. For this process to work we assume that a sufficiently high proportion of the samples are labelled correctly. Because any incorrectly labelled samples would distort the average profile, we will first studentize the values to identify outliers within each cell type. These outliers will then be excluded from the calculation of the average profile for each cell type.
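The studentisation step can be sketched as below; pcScore, cellType and the simulated values (including one planted outlier) are illustrative.

```r
## Sketch of within-cell-type studentisation of PC scores: subtract the
## group mean and divide by the group SD, then flag extreme values.
## Names and data are illustrative; the last value is a planted outlier.
set.seed(2)
pcScore  <- c(rnorm(20, mean = 5), rnorm(20, mean = -5), 30)
cellType <- factor(c(rep("iMicro", 20), rep("iPSC", 21)))

studentised <- ave(pcScore, cellType, FUN = function(v) as.numeric(scale(v)))
## samples beyond the chosen threshold are excluded from the average profile
which(abs(studentised) > 3)
```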
Below are boxplots of the within-cell-type studentized PCs. The dashed horizontal lines represent different possible thresholds. Cell-type specific correlations indicate this statistical method is working. It is worthwhile being stringent here so that we get an accurate representative profile that will enable us to more cleanly predict which cell type a sample is most similar to.
We can look at which samples are included to define the representative profile as we apply more stringent thresholds. This should “clean” up the previous PCA plot.
par(mfrow = c(2, 2))
## one panel per studentized threshold: keep only samples below that threshold
for(i in 1:ncol(pcaClassify$withinSDMean)){
  keep <- which(QCmetrics$maxStudentPCA < c(1, 1.5, 2, 3)[i])
  plot(betas.scores[keep, 1], betas.scores[keep, 2], pch = 16, col = cellCols[as.factor(QCmetrics$Cell_Type)][keep], xlab = "PC1", ylab = "PC2", cex = 1.2, cex.lab = 1.5, cex.axis = 1.5, main = paste0("Studentized threshold +/- ", c(1, 1.5, 2, 3)[i]))
}
legend("bottomright", pch = 16, col = cellCols, levels(as.factor(QCmetrics$Cell_Type)))
Here we can see how many samples of each type are excluded from this analysis.
| Threshold | iMicro | iPSC |
|---|---|---|
| 1.0 | 25 | 2 |
| 1.5 | 11 | 2 |
| 2.0 | 4 | 1 |
| 3.0 | 1 | 1 |
For individuals where the FACs sorting did not generate distinct cellular populations, this will disrupt the prediction for all cell types. Therefore we will first identify these individuals and exclude them.
For each sample we can determine how similar it is to the average profile for the labelled cell type. We can combine the sample level scores into an individual level score, which is used to rank the FACs sorting. Below we will visualise the individual level FAC efficiency scores.
uniqueIDs<-read.csv(paste0(qcOutFolder, "/IndividualFACsEffciencyScores.csv"), row.names = 1, stringsAsFactors = FALSE)
boxplot(uniqueIDs$FACsEffiency ~ uniqueIDs$nFACS, xlab = "Number of fractions", ylab = "Median SD from mean")
abline(h=nSDThres)
Let’s visualise some good examples.
All individuals with a score of 5 or greater are excluded as having suboptimal FACs sorting.
badFACS<-uniqueIDs$Individual_ID[which(uniqueIDs$FACsEffiency >= 5)]
par(mfrow = c(3,3))
for(each in badFACS){
index<-which(QCmetrics$Individual_ID == each)
if(length(index) > 1){
plot(betas.scores[index,1], betas.scores[index,2],col = cellCols[as.factor(QCmetrics$Cell_Type)][index], pch = c(18,16)[as.factor(QCmetrics$passCTCheck)][index], xlab = "PC1", ylab = "PC2", cex.axis = 1.5, cex.lab = 1.5, cex = 4, main = each, xlim = x_lim, ylim = y_lim)
for(i in 1:ncol(cellMeanPCA)){
polygon(c(lowerBound[1,i], lowerBound[1,i], upperBound[1,i], upperBound[1,i]), c(lowerBound[2,i],upperBound[2,i],upperBound[2,i],lowerBound[2,i] ), border = cellCols[match(colnames(cellMeanPCA)[i], cellTypes)], lwd = 2, col = cellColsTrans[match(colnames(cellMeanPCA)[i], cellTypes)])
}
mtext(side = 3, adj =1, signif(uniqueIDs$FACsEffiency[match(each, uniqueIDs$Individual_ID)],2))
}
}
In total we identified 0 individuals with suboptimal FACs sorting, and all 0 samples for these individuals were excluded.
kbl(table(QCmetrics$Cell_Type, QCmetrics$passFACS)) %>%
kable_styling(bootstrap_options = c("striped", "hover"), font_size = 10)
| | TRUE |
|---|---|
| iMicro | 54 |
| iPSC | 6 |
The exclusion of these individuals should give us more refined average profiles, so we will re-run the outlier detection.
In this approach we will use the Mahalanobis distance to compare each sample to all possible cell types. The closest cell type, i.e. the one with the smallest Mahalanobis distance, is assigned to that sample. The Mahalanobis distance is a multi-dimensional generalisation of the idea of measuring how many standard deviations away a point P is from the mean of a distribution D. Here the distribution is a cell type and the point is a sample. If the sample looks exactly like the mean of a cell type, its Mahalanobis distance will be 0; the larger the distance, the further away it is. With this method every sample is assigned a cell type, as each has to be closest to something, but it could still be very dissimilar to the cell type it is assigned. This prediction is not possible for cell types with a limited number of samples.
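The classification rule can be sketched with stats::mahalanobis(); the simulated profiles and the names scoresA, scoresB and newSample are illustrative.

```r
## Sketch of Mahalanobis-based classification in PC space: compute the
## distance from a sample to each cell type's learned profile and assign
## the closest. Names and data are illustrative.
set.seed(4)
scoresA <- matrix(rnorm(30 * 2, mean = 0), ncol = 2)   # cell type A profile
scoresB <- matrix(rnorm(30 * 2, mean = 6), ncol = 2)   # cell type B profile
newSample <- c(5.8, 6.2)                               # sample to classify

dA <- mahalanobis(newSample, colMeans(scoresA), cov(scoresA))
dB <- mahalanobis(newSample, colMeans(scoresB), cov(scoresB))
c(A = dA, B = dB)   # the smallest distance gives the predicted cell type
```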
The table below shows how many samples were predicted the same as their label. In this sample, 91.7% of the samples had DNAm profiles which most closely resembled the cell type they were labelled as. It should be noted that discrepancies between this prediction and the sample label may arise due to 1) poor data quality, 2) insufficient isolation, 3) sample mislabelling, or 4) fractions containing overlapping sets of cells.
| | FALSE | TRUE |
|---|---|---|
| iMicro | 0 | 54 |
| iPSC | 5 | 1 |
To visualise the success of this filtering step, we will repeat the hierarchical clustering from earlier using only the samples whose prediction matched their label.
Here we will look at the correlations between the Mahalanobis distances across samples. This might highlight where common misclassifications could occur.
Below are a series of boxplots to look at the distribution of the distances to gauge how equal the prediction works across cell-types. Each panel plots the distances to a single cell type, where the different boxplots group samples by their labelled cell type.
Given the overlap of different cell types, we define a regular polytope in two-dimensional space for each cell type. The centre of each polytope is calculated as the mean of the PCs. The polytope then extends 3 SD away from the mean in all dimensions. The geometric position of each sample is compared to each of these regions, and if it falls within a polytope it is considered representative of that cell type. As the regions for two cell types may overlap, a sample is labelled with all the cell types whose regions it falls within. With this approach, a sample may fall outside of all cell types and therefore remain unassigned.
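A minimal sketch of the geometric check, implemented here as an axis-aligned box of +/- 3 SD around the mean in each PC; inRegion, centre and sds are illustrative names.

```r
## Sketch of the polytope membership test in the first two PCs: a sample
## passes for a cell type if it lies within nSD standard deviations of
## that cell type's mean on every PC. Names and values are illustrative.
inRegion <- function(sample, centre, sds, nSD = 3) {
  all(sample >= centre - nSD * sds & sample <= centre + nSD * sds)
}

centre <- c(2, -1)      # illustrative cell type mean on PC1, PC2
sds    <- c(0.5, 0.8)   # illustrative cell type SDs on PC1, PC2
inRegion(c(2.4, -0.5), centre, sds)   # TRUE: within 3 SD on both PCs
inRegion(c(5.0, -0.5), centre, sds)   # FALSE: outside on PC1
```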
tab2<-table(QCmetrics$Cell_Type, QCmetrics$withinSDMean)
percCon2<-sum(QCmetrics$withinSDMean)/sum(QCmetrics$passFACS)*100
The table below shows how many samples were located within the same geometric space as the others in their sample group. In this sample, 90% of the samples had DNAm profiles which closely resembled the cell type they were labelled as.
pander(tab2)
| | FALSE | TRUE |
|---|---|---|
| iMicro | 5 | 49 |
| iPSC | 1 | 5 |
To visualize how stringent this filtering is, the plots below add thresholds more relaxed than the one selected.
## add thresholds more relaxed than selected
lowerBound1<-cellMeanPCA-(nSDThres+0.5)*cellSDPCA
upperBound1<-cellMeanPCA+(nSDThres+0.5)*cellSDPCA
lowerBound2<-cellMeanPCA-(nSDThres+1)*cellSDPCA
upperBound2<-cellMeanPCA+(nSDThres+1)*cellSDPCA
par(mfrow = c(3,2))
for(i in 1:ncol(lowerBound)){
plot(betas.scores[,1], betas.scores[,2], xlab = "PC 1", ylab = "PC 2", type = "n",cex.axis = 2, cex.lab = 2,cex = 1.2, main = colnames(lowerBound)[i], cex.main = 2)
# just plot points labelled as that cell type
sInd<-which(QCmetrics$Cell_Type == colnames(lowerBound)[i])
points(betas.scores[sInd,1], betas.scores[sInd,2], pch = c(4,16)[as.factor(QCmetrics$withinSDMean[sInd])], col = c("grey", "black")[as.factor(QCmetrics$withinSDMean[sInd])])
polygon(c(lowerBound[1,i], lowerBound[1,i], upperBound[1,i], upperBound[1,i]), c(lowerBound[2,i],upperBound[2,i],upperBound[2,i],lowerBound[2,i] ), border = cellCols[i], lwd = 2)
polygon(c(lowerBound1[1,i], lowerBound1[1,i], upperBound1[1,i], upperBound1[1,i]), c(lowerBound1[2,i],upperBound1[2,i],upperBound1[2,i],lowerBound1[2,i] ), border = cellCols[i], lwd = 2, lty = 2)
polygon(c(lowerBound2[1,i], lowerBound2[1,i], upperBound2[1,i], upperBound2[1,i]), c(lowerBound2[2,i],upperBound2[2,i],upperBound2[2,i],lowerBound2[2,i] ), border = cellCols[i], lwd = 2, lty = 2)
}
To visualize the success of this filtering step, we will repeat the hierarchical clustering from earlier using only the samples whose prediction matched their label.
Samples are classed as passing the cell type check if, based on the first 2 PCs, they are within 3 SD of the average profile of their labelled cell type.
All samples passing this check are retained.
passTab<- table(QCmetrics$passCTCheck, QCmetrics$Cell_Type)
pander(passTab)
| | iMicro | iPSC |
|---|---|---|
| FALSE | 5 | 1 |
| TRUE | 49 | 5 |
We can see how clean the retained samples are by looking at how many SD they are from the mean of their cell type. The boxplots below include a) all samples and b) all samples that pass the cell type check.
boxplot(QCmetrics$maxSD ~ QCmetrics$Cell_Type, xlab = "Cell type", ylab = "nSD from mean",col = cellCols)
boxplot(QCmetrics$maxSD[QCmetrics$passCTCheck] ~ QCmetrics$Cell_Type[QCmetrics$passCTCheck], xlab = "Cell type", ylab = "nSD from mean",col = cellCols)
Let’s explore the other QC metrics of samples that are not predicted as their labelled cell type.
par(mfrow = c(1,3))
boxplot(QCmetrics$M.median ~ QCmetrics$predLabelledCellType, ylab = "Median M Intensity", xlab = "Concordant Prediction",cex.axis = 2, cex.lab = 2)
boxplot(QCmetrics$U.median ~ QCmetrics$predLabelledCellType, ylab = "Median U Intensity", xlab = "Concordant Prediction",cex.axis = 2, cex.lab = 2)
boxplot(QCmetrics$intens.ratio ~ QCmetrics$predLabelledCellType, ylab = "M:U Ratio", xlab = "Concordant Prediction",cex.axis = 2, cex.lab = 2)
boxplot(QCmetrics$rmsd ~ QCmetrics$predLabelledCellType, ylab = "Normalisation Violence", xlab = "Concordant Prediction",cex.axis = 2, cex.lab = 2)
if("genoCheck" %in% colnames(QCmetrics) ){
boxplot(QCmetrics$genoCheck ~ QCmetrics$predLabelledCellType, ylab = "Correlation with SNP data", xlab = "Concordant Prediction",cex.axis = 2, cex.lab = 2)
}
boxplot(QCmetrics$bisulfCon ~ QCmetrics$predLabelledCellType, ylab = "Bisulfite Conversion", xlab = "Concordant Prediction",cex.axis = 2, cex.lab = 2)
Boxplots to separate cell types should also look cleaner.
Where multiple samples from the same individual are identified as not resembling their labelled cell types, we will visualise all samples from that individual in an attempt to determine whether it is a failed antibody or a sample mix up.
Built with R version 4.4.3
## R version 4.4.3 (2025-02-28)
## Platform: x86_64-conda-linux-gnu
## Running under: Red Hat Enterprise Linux 9.4 (Plow)
##
## Matrix products: default
## BLAS/LAPACK: /lustre/home/mk693/micromamba/lib/libopenblasp-r0.3.29.so; LAPACK version 3.12.0
##
## locale:
## [1] LC_CTYPE=en_GB.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_GB.UTF-8 LC_COLLATE=en_GB.UTF-8
## [5] LC_MONETARY=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8
## [7] LC_PAPER=en_GB.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C
##
## time zone: Europe/London
## tzcode source: system (glibc)
##
## attached base packages:
## [1] parallel stats4 stats graphics grDevices utils datasets
## [8] methods base
##
## other attached packages:
## [1] mixtools_2.0.0.1
## [2] data.table_1.17.0
## [3] pheatmap_1.0.12
## [4] RColorBrewer_1.1-3
## [5] kableExtra_1.4.0
## [6] pander_0.6.6
## [7] corrplot_0.95
## [8] diptest_0.77-1
## [9] gplots_3.2.0
## [10] bigmelon_1.32.0
## [11] gdsfmt_1.42.0
## [12] wateRmelon_2.11.4
## [13] illuminaio_0.48.0
## [14] IlluminaHumanMethylation450kanno.ilmn12.hg19_0.6.1
## [15] ROC_1.82.0
## [16] lumi_2.58.0
## [17] methylumi_2.52.0
## [18] minfi_1.52.0
## [19] bumphunter_1.48.0
## [20] locfit_1.5-9.12
## [21] iterators_1.0.14
## [22] foreach_1.5.2
## [23] Biostrings_2.74.0
## [24] XVector_0.46.0
## [25] SummarizedExperiment_1.36.0
## [26] MatrixGenerics_1.18.0
## [27] FDb.InfiniumMethylation.hg19_2.2.0
## [28] org.Hs.eg.db_3.20.0
## [29] TxDb.Hsapiens.UCSC.hg19.knownGene_3.2.2
## [30] GenomicFeatures_1.58.0
## [31] AnnotationDbi_1.68.0
## [32] GenomicRanges_1.58.0
## [33] GenomeInfoDb_1.42.0
## [34] IRanges_2.40.0
## [35] S4Vectors_0.44.0
## [36] ggplot2_3.5.2
## [37] reshape2_1.4.4
## [38] scales_1.3.0
## [39] matrixStats_1.5.0
## [40] limma_3.62.1
## [41] Biobase_2.66.0
## [42] BiocGenerics_0.52.0
##
## loaded via a namespace (and not attached):
## [1] splines_4.4.3 BiocIO_1.16.0
## [3] bitops_1.0-9 tibble_3.2.1
## [5] preprocessCore_1.68.0 XML_3.99-0.17
## [7] lifecycle_1.0.4 lattice_0.22-7
## [9] MASS_7.3-64 base64_2.0.2
## [11] crosstalk_1.2.1 scrime_1.3.5
## [13] magrittr_2.0.3 plotly_4.10.4
## [15] sass_0.4.9 rmarkdown_2.29
## [17] jquerylib_0.1.4 yaml_2.3.10
## [19] doRNG_1.8.6.2 askpass_1.2.1
## [21] DBI_1.2.3 abind_1.4-5
## [23] zlibbioc_1.52.0 quadprog_1.5-8
## [25] purrr_1.0.4 RCurl_1.98-1.16
## [27] GenomeInfoDbData_1.2.13 rentrez_1.2.3
## [29] genefilter_1.88.0 annotate_1.84.0
## [31] svglite_2.1.3 DelayedMatrixStats_1.28.0
## [33] cdegUtilities_0.0.2 codetools_0.2-20
## [35] DelayedArray_0.32.0 DT_0.33
## [37] xml2_1.3.8 tidyselect_1.2.1
## [39] farver_2.1.2 UCSC.utils_1.2.0
## [41] beanplot_1.3.1 GenomicAlignments_1.42.0
## [43] jsonlite_2.0.0 multtest_2.62.0
## [45] survival_3.8-3 systemfonts_1.2.1
## [47] segmented_2.1-4 tools_4.4.3
## [49] Rcpp_1.0.14 glue_1.8.0
## [51] SparseArray_1.6.0 xfun_0.51
## [53] mgcv_1.9-3 dplyr_1.1.4
## [55] HDF5Array_1.34.0 withr_3.0.2
## [57] BiocManager_1.30.25 fastmap_1.2.0
## [59] rhdf5filters_1.18.0 openssl_2.3.2
## [61] caTools_1.18.3 digest_0.6.37
## [63] R6_2.6.1 colorspace_2.1-1
## [65] gtools_3.9.5 RSQLite_2.3.9
## [67] tidyr_1.3.1 generics_0.1.3
## [69] rtracklayer_1.66.0 htmlwidgets_1.6.4
## [71] httr_1.4.7 S4Arrays_1.6.0
## [73] pkgconfig_2.0.3 gtable_0.3.6
## [75] blob_1.2.4 siggenes_1.80.0
## [77] htmltools_0.5.8.1 png_0.1-8
## [79] knitr_1.50 rstudioapi_0.17.1
## [81] tzdb_0.5.0 rjson_0.2.23
## [83] nlme_3.1-168 curl_6.2.2
## [85] cachem_1.1.0 rhdf5_2.50.0
## [87] stringr_1.5.1 KernSmooth_2.23-26
## [89] restfulr_0.0.15 GEOquery_2.74.0
## [91] pillar_1.10.2 grid_4.4.3
## [93] reshape_0.8.9 vctrs_0.6.5
## [95] xtable_1.8-4 evaluate_1.0.3
## [97] readr_2.1.5 cli_3.6.4
## [99] compiler_4.4.3 Rsamtools_2.22.0
## [101] rlang_1.1.5 crayon_1.5.3
## [103] rngtools_1.5.2 nor1mix_1.3-3
## [105] mclust_6.1.1 affy_1.84.0
## [107] plyr_1.8.9 stringi_1.8.7
## [109] viridisLite_0.4.2 BiocParallel_1.40.0
## [111] nleqslv_3.3.5 munsell_0.5.1
## [113] lazyeval_0.2.2 Matrix_1.7-3
## [115] hms_1.1.3 sparseMatrixStats_1.18.0
## [117] bit64_4.6.0-1 Rhdf5lib_1.28.0
## [119] KEGGREST_1.46.0 statmod_1.5.0
## [121] kernlab_0.9-33 memoise_2.0.1
## [123] affyio_1.76.0 bslib_0.9.0
## [125] bit_4.6.0